1 Summary

According to the Cleveland Clinic, Up to half of all women will experience fibrocystic changes that cause noncancerous breast lumps at some point in their lives. Unfortunately, a minority of these common cases spread to distant sites via the bloodstream or the lymphatic system, and thus there is an obvious impetus for accurate and reliable predictive screening.

While this dataset has had many applications, Our project specifically aims to investigate whether supervised machine learning methods are capable of predicting and differentiating benign & malignant breast cancer tumors with sufficiently strong recall in the context of a test class split that is emblematic of the true population proportion at scale.

We employed clustering and Random Forest methods on a dataset obtained from the UCI Machine Learning Repository. This dataset was created by Dr. William H. Wolberg from the University of Wisconsin, which has binary classification and includes features computed from digitized images of tumor biopsies.

After exploratory analysis, we analyzed each method and formed an initial hypothesis that we could optimize tuning parameters to minimize false negatives and achieve a statistically significant confidence interval in prediction.


1.1 Data Validation

Load libraries

Read in the data.

cancer = read.csv("./cancer.csv")
cancer_default = read.csv("./cancer.csv")
#cancer <- read.csv("/cloud/project/cancer.csv")

1.1.1 First 6 Rows

head(cancer)
##         id diagnosis radius_mean texture_mean perimeter_mean area_mean
## 1   842302         M       17.99        10.38         122.80    1001.0
## 2   842517         M       20.57        17.77         132.90    1326.0
## 3 84300903         M       19.69        21.25         130.00    1203.0
## 4 84348301         M       11.42        20.38          77.58     386.1
## 5 84358402         M       20.29        14.34         135.10    1297.0
## 6   843786         M       12.45        15.70          82.57     477.1
##   smoothness_mean compactness_mean concavity_mean concave.points_mean
## 1         0.11840          0.27760         0.3001             0.14710
## 2         0.08474          0.07864         0.0869             0.07017
## 3         0.10960          0.15990         0.1974             0.12790
## 4         0.14250          0.28390         0.2414             0.10520
## 5         0.10030          0.13280         0.1980             0.10430
## 6         0.12780          0.17000         0.1578             0.08089
##   symmetry_mean fractal_dimension_mean radius_se texture_se perimeter_se
## 1        0.2419                0.07871    1.0950     0.9053        8.589
## 2        0.1812                0.05667    0.5435     0.7339        3.398
## 3        0.2069                0.05999    0.7456     0.7869        4.585
## 4        0.2597                0.09744    0.4956     1.1560        3.445
## 5        0.1809                0.05883    0.7572     0.7813        5.438
## 6        0.2087                0.07613    0.3345     0.8902        2.217
##   area_se smoothness_se compactness_se concavity_se concave.points_se
## 1  153.40      0.006399        0.04904      0.05373           0.01587
## 2   74.08      0.005225        0.01308      0.01860           0.01340
## 3   94.03      0.006150        0.04006      0.03832           0.02058
## 4   27.23      0.009110        0.07458      0.05661           0.01867
## 5   94.44      0.011490        0.02461      0.05688           0.01885
## 6   27.19      0.007510        0.03345      0.03672           0.01137
##   symmetry_se fractal_dimension_se radius_worst texture_worst perimeter_worst
## 1     0.03003             0.006193        25.38         17.33          184.60
## 2     0.01389             0.003532        24.99         23.41          158.80
## 3     0.02250             0.004571        23.57         25.53          152.50
## 4     0.05963             0.009208        14.91         26.50           98.87
## 5     0.01756             0.005115        22.54         16.67          152.20
## 6     0.02165             0.005082        15.47         23.75          103.40
##   area_worst smoothness_worst compactness_worst concavity_worst
## 1     2019.0           0.1622            0.6656          0.7119
## 2     1956.0           0.1238            0.1866          0.2416
## 3     1709.0           0.1444            0.4245          0.4504
## 4      567.7           0.2098            0.8663          0.6869
## 5     1575.0           0.1374            0.2050          0.4000
## 6      741.6           0.1791            0.5249          0.5355
##   concave.points_worst symmetry_worst fractal_dimension_worst  X
## 1               0.2654         0.4601                 0.11890 NA
## 2               0.1860         0.2750                 0.08902 NA
## 3               0.2430         0.3613                 0.08758 NA
## 4               0.2575         0.6638                 0.17300 NA
## 5               0.1625         0.2364                 0.07678 NA
## 6               0.1741         0.3985                 0.12440 NA

1.1.2 Kable

#install.packages("kableExtra")
require("kableExtra")
## Loading required package: kableExtra
## 
## Attaching package: 'kableExtra'
## The following object is masked from 'package:dplyr':
## 
##     group_rows
kable(str(cancer))
## 'data.frame':    569 obs. of  33 variables:
##  $ id                     : int  842302 842517 84300903 84348301 84358402 843786 844359 84458202 844981 84501001 ...
##  $ diagnosis              : chr  "M" "M" "M" "M" ...
##  $ radius_mean            : num  18 20.6 19.7 11.4 20.3 ...
##  $ texture_mean           : num  10.4 17.8 21.2 20.4 14.3 ...
##  $ perimeter_mean         : num  122.8 132.9 130 77.6 135.1 ...
##  $ area_mean              : num  1001 1326 1203 386 1297 ...
##  $ smoothness_mean        : num  0.1184 0.0847 0.1096 0.1425 0.1003 ...
##  $ compactness_mean       : num  0.2776 0.0786 0.1599 0.2839 0.1328 ...
##  $ concavity_mean         : num  0.3001 0.0869 0.1974 0.2414 0.198 ...
##  $ concave.points_mean    : num  0.1471 0.0702 0.1279 0.1052 0.1043 ...
##  $ symmetry_mean          : num  0.242 0.181 0.207 0.26 0.181 ...
##  $ fractal_dimension_mean : num  0.0787 0.0567 0.06 0.0974 0.0588 ...
##  $ radius_se              : num  1.095 0.543 0.746 0.496 0.757 ...
##  $ texture_se             : num  0.905 0.734 0.787 1.156 0.781 ...
##  $ perimeter_se           : num  8.59 3.4 4.58 3.44 5.44 ...
##  $ area_se                : num  153.4 74.1 94 27.2 94.4 ...
##  $ smoothness_se          : num  0.0064 0.00522 0.00615 0.00911 0.01149 ...
##  $ compactness_se         : num  0.049 0.0131 0.0401 0.0746 0.0246 ...
##  $ concavity_se           : num  0.0537 0.0186 0.0383 0.0566 0.0569 ...
##  $ concave.points_se      : num  0.0159 0.0134 0.0206 0.0187 0.0188 ...
##  $ symmetry_se            : num  0.03 0.0139 0.0225 0.0596 0.0176 ...
##  $ fractal_dimension_se   : num  0.00619 0.00353 0.00457 0.00921 0.00511 ...
##  $ radius_worst           : num  25.4 25 23.6 14.9 22.5 ...
##  $ texture_worst          : num  17.3 23.4 25.5 26.5 16.7 ...
##  $ perimeter_worst        : num  184.6 158.8 152.5 98.9 152.2 ...
##  $ area_worst             : num  2019 1956 1709 568 1575 ...
##  $ smoothness_worst       : num  0.162 0.124 0.144 0.21 0.137 ...
##  $ compactness_worst      : num  0.666 0.187 0.424 0.866 0.205 ...
##  $ concavity_worst        : num  0.712 0.242 0.45 0.687 0.4 ...
##  $ concave.points_worst   : num  0.265 0.186 0.243 0.258 0.163 ...
##  $ symmetry_worst         : num  0.46 0.275 0.361 0.664 0.236 ...
##  $ fractal_dimension_worst: num  0.1189 0.089 0.0876 0.173 0.0768 ...
##  $ X                      : logi  NA NA NA NA NA NA ...

1.1.3 Data Types and Ranges

The “Diagnosis” categorical variable should be a factor instead of a character. We will have to convert that to a factor. There also seems to be an extra column at the end that is just full of NAs. We will have to delete that column. The ranges of the numerical data also seem to be in order.

#cancer$diagnosis = as.factor(cancer$diagnosis)
cancer = cancer[, -c(33)]
cancer = cancer[, -c(1)]

cancer$diagnosis[cancer$diagnosis == "M"] <- 1
cancer$diagnosis[cancer$diagnosis == "B"] <- 0

cancer$diagnosis = as.factor(cancer$diagnosis)

1.1.4 Duplicates / Missing Values / Nulls

#unique(cancer$radius_mean) #No radiuses were duplicated
dim(cancer[duplicated(cancer$id),])[1] #No ID numbers were duplicated
## [1] 0

No missing data other than the last column full of NAs that we already deleted

cancer<-na.omit(cancer)

No nulls present in the data after removing the final column

Final cleaned up dataset:

str(cancer)
## 'data.frame':    569 obs. of  31 variables:
##  $ diagnosis              : Factor w/ 2 levels "0","1": 2 2 2 2 2 2 2 2 2 2 ...
##  $ radius_mean            : num  18 20.6 19.7 11.4 20.3 ...
##  $ texture_mean           : num  10.4 17.8 21.2 20.4 14.3 ...
##  $ perimeter_mean         : num  122.8 132.9 130 77.6 135.1 ...
##  $ area_mean              : num  1001 1326 1203 386 1297 ...
##  $ smoothness_mean        : num  0.1184 0.0847 0.1096 0.1425 0.1003 ...
##  $ compactness_mean       : num  0.2776 0.0786 0.1599 0.2839 0.1328 ...
##  $ concavity_mean         : num  0.3001 0.0869 0.1974 0.2414 0.198 ...
##  $ concave.points_mean    : num  0.1471 0.0702 0.1279 0.1052 0.1043 ...
##  $ symmetry_mean          : num  0.242 0.181 0.207 0.26 0.181 ...
##  $ fractal_dimension_mean : num  0.0787 0.0567 0.06 0.0974 0.0588 ...
##  $ radius_se              : num  1.095 0.543 0.746 0.496 0.757 ...
##  $ texture_se             : num  0.905 0.734 0.787 1.156 0.781 ...
##  $ perimeter_se           : num  8.59 3.4 4.58 3.44 5.44 ...
##  $ area_se                : num  153.4 74.1 94 27.2 94.4 ...
##  $ smoothness_se          : num  0.0064 0.00522 0.00615 0.00911 0.01149 ...
##  $ compactness_se         : num  0.049 0.0131 0.0401 0.0746 0.0246 ...
##  $ concavity_se           : num  0.0537 0.0186 0.0383 0.0566 0.0569 ...
##  $ concave.points_se      : num  0.0159 0.0134 0.0206 0.0187 0.0188 ...
##  $ symmetry_se            : num  0.03 0.0139 0.0225 0.0596 0.0176 ...
##  $ fractal_dimension_se   : num  0.00619 0.00353 0.00457 0.00921 0.00511 ...
##  $ radius_worst           : num  25.4 25 23.6 14.9 22.5 ...
##  $ texture_worst          : num  17.3 23.4 25.5 26.5 16.7 ...
##  $ perimeter_worst        : num  184.6 158.8 152.5 98.9 152.2 ...
##  $ area_worst             : num  2019 1956 1709 568 1575 ...
##  $ smoothness_worst       : num  0.162 0.124 0.144 0.21 0.137 ...
##  $ compactness_worst      : num  0.666 0.187 0.424 0.866 0.205 ...
##  $ concavity_worst        : num  0.712 0.242 0.45 0.687 0.4 ...
##  $ concave.points_worst   : num  0.265 0.186 0.243 0.258 0.163 ...
##  $ symmetry_worst         : num  0.46 0.275 0.361 0.664 0.236 ...
##  $ fractal_dimension_worst: num  0.1189 0.089 0.0876 0.173 0.0768 ...

1.2 Averages

## 
##   0   1 
## 357 212
##   radius_mean     smoothness_mean   compactness_mean 
##  Min.   : 6.981   Min.   :0.05263   Min.   :0.01938  
##  1st Qu.:11.700   1st Qu.:0.08637   1st Qu.:0.06492  
##  Median :13.370   Median :0.09587   Median :0.09263  
##  Mean   :14.127   Mean   :0.09636   Mean   :0.10434  
##  3rd Qu.:15.780   3rd Qu.:0.10530   3rd Qu.:0.13040  
##  Max.   :28.110   Max.   :0.16340   Max.   :0.34540

We observe seemingly high density around the mean and thus investigate the distribution of these key variables below:

1.3 Analyzing key feature distributions

We observe skews with key impact features here. Attributes with a large value range have a higher impact on distance than attributes with a small value range, which nullifies clustering efficacy. In order to solve this, we map each attribute value proportionally to the same value interval, so as to balance the influence of each attribute on the distance without compromising each feature’s distribution profile.

2 Plots/Graphs

2.1 Correlation Matrix

First, we’ll be creating a correlation matrix to get a sense of which variables are the most correlated with others. We’re particularly interested in size based variables such as radius_mean, perimeter, and area.

cancer_cor = cancer
cancer_cor$diagnosis <- as.numeric(cancer_cor$diagnosis) #converting factor to numeric
cancer_cor = round(cor(cancer_cor),1)
  
p.mat <- cor_pmat(cancer_cor)
head(p.mat[, 1:4])
##                    diagnosis  radius_mean texture_mean perimeter_mean
## diagnosis       0.000000e+00 4.843751e-12 0.0206181064   2.208501e-13
## radius_mean     4.843751e-12 0.000000e+00 0.0311828524   1.071189e-28
## texture_mean    2.061811e-02 3.118285e-02 0.0000000000   3.902333e-02
## perimeter_mean  2.208501e-13 1.071189e-28 0.0390233327   0.000000e+00
## area_mean       2.131365e-11 3.376931e-32 0.0330808694   1.583679e-26
## smoothness_mean 9.504911e-01 2.343061e-01 0.0006321165   3.853980e-01
ggcorrplot(cancer_cor)

From looking at the correlation matrix, we can see a few interesting variables to keep in mind along the “Diagnosis” row. It seems that area, mean, and perimeter are in fact some of the most predictive variables of the diagnosis. However, another variable seems to also be somewhat correlated with the diagnosis: concave_points. Using this information, we would like to visualize how each of these variables are related to diagnosis using another plot.

cancer_cor <- cancer
cancer_cor$diagnosis <- as.numeric(cancer_cor$diagnosis) 
cor1 <- cor(x = cancer_cor$diagnosis, y = cancer_cor[2:19], use="complete.obs")
corrplot(cor1)

2.2 Diagnosis Correlation

We’re going to plot each of those four variables we found earlier against diagnosis to have a better visual understanding of what is most predictive of diagnosis. We’re also going to include four plots with lesser correlated variables to provide some contrast.

library(gridExtra)
## 
## Attaching package: 'gridExtra'
## The following object is masked from 'package:randomForest':
## 
##     combine
## The following object is masked from 'package:dplyr':
## 
##     combine
p1 = ggplot(cancer, aes(x=diagnosis, y=radius_mean,color = diagnosis)) +
  geom_point(alpha = 1/20) +
  geom_point(size=0.1) +
  labs(title = "Diagnosis by Radius", x="Diagnosis", y="Radius",) +
  scale_color_manual(labels = c("benign", "malignant"), values = c("green", "red")) +
  scale_x_discrete(labels=c("1" = "malignant", "0" = "benign")) + 
  geom_jitter()

p2 = ggplot(cancer, aes(x=diagnosis, y=area_mean,color = diagnosis)) +
  geom_point(alpha = 1/20) +
  geom_point(size=0.1) +
  labs(title = "Diagnosis by Area", x="Diagnosis", y="Area",) +
  scale_color_manual(labels = c("benign", "malignant"), values = c("green", "red")) +
  scale_x_discrete(labels=c("1" = "malignant", "0" = "benign")) + 
  geom_jitter()

p3 = ggplot(cancer, aes(x=diagnosis, y=perimeter_mean,color = diagnosis)) +
  geom_point(alpha = 1/20) +
  geom_point(size=0.1) +
  labs(title = "Diagnosis by Perimeter", x="Diagnosis", y="Perimeter",) +
  scale_color_manual(labels = c("benign", "malignant"), values = c("green", "red")) +
  scale_x_discrete(labels=c("1" = "malignant", "0" = "benign")) + 
  geom_jitter()

p4 = ggplot(cancer, aes(x=diagnosis, y=concave.points_mean,color = diagnosis)) +
  geom_point(alpha = 1/20) +
  geom_point(size=0.1) +
  labs(title = "Diagnosis by Concave Points", x="Diagnosis", y="Concave Points",) +
  scale_color_manual(labels = c("benign", "malignant"), values = c("green", "red")) +
  scale_x_discrete(labels=c("1" = "malignant", "0" = "benign")) + 
  geom_jitter()

p5 = ggplot(cancer, aes(x=diagnosis, y=smoothness_mean,color = diagnosis)) +
  geom_point(alpha = 1/20) +
  geom_point(size=0.1) +
  labs(title = "Diagnosis by Smoothness", x="Diagnosis", y="Smoothness",) +
  scale_color_manual(labels = c("benign", "malignant"), values = c("green", "red")) +
  scale_x_discrete(labels=c("1" = "malignant", "0" = "benign")) + 
  geom_jitter()

p6 = ggplot(cancer, aes(x=diagnosis, y=texture_mean,color = diagnosis)) +
  geom_point(alpha = 1/20) +
  geom_point(size=0.1) +
  labs(title = "Diagnosis by Texture", x="Diagnosis", y="Texture",) +
  scale_color_manual(labels = c("benign", "malignant"), values = c("green", "red")) +
  scale_x_discrete(labels=c("1" = "malignant", "0" = "benign")) + 
  geom_jitter()

p7 = ggplot(cancer, aes(x=diagnosis, y=compactness_mean,color = diagnosis)) +
  geom_point(alpha = 1/20) +
  geom_point(size=0.1) +
  labs(title = "Diagnosis by Compactness", x="Diagnosis", y="Compactness",) +
  scale_color_manual(labels = c("benign", "malignant"), values = c("green", "red")) +
  scale_x_discrete(labels=c("1" = "malignant", "0" = "benign")) + 
  geom_jitter()

p8 = ggplot(cancer, aes(x=diagnosis, y=fractal_dimension_mean,color = diagnosis)) +
  geom_point(alpha = 1/20) +
  geom_point(size=0.1) +
  labs(title = "Diagnosis by Fractal Dimension", x="Diagnosis", y="Fractal Dimension",) +
  scale_color_manual(labels = c("benign", "malignant"), values = c("green", "red")) +
  scale_x_discrete(labels=c("1" = "malignant", "0" = "benign")) + 
  geom_jitter()

grid.arrange(p1,p2,p3,p4,p5,p6,p7,p8, ncol=2)

We can definitely see from the first four plots (Radius, Area, Perimeter, Concave Points) that they are much better identifiers of cancerous tumors than the other variables.

Most surprisingly, based on the correlation matrix and the plots, the single most predictive variable of cancerous breast tumors was not the radius, perimeter, or the mean. It seems to be the concave points variable.

2.3 Concave Points/Radius Regression

A regression smoother (loess) ggplot can show how concave points could really compliment the radius in the context of models that we might want to build.

ggplot(cancer, aes(x=radius_mean, y=concave.points_mean, col=diagnosis)) + 
  geom_point(alpha=0.75) + 
  labs(
    x='Radius', 
    y='Concave Points',
    title='Radius/Concave Points Regression Smoother (loess)'
  ) +
  geom_smooth(
    method='loess', 
    formula=y~x, 
    se=FALSE
  ) +
  theme(
    plot.title=element_text(hjust = 0.5)
  ) +
  scale_color_manual(labels = c("benign", "malignant"), values = c("cadetblue3", "brown1"))

The concave points seems to work very well with the radius. We can see that nearly all concave_point values greater than 0.07 results in a Malignant diagnosis. The positive linear relationship between concave points and radius makes this a powerful predictor variable that could help us build our models.

3 K-Means Clustering

Before continuing on to our Random Forest model, we’re going to use k-means clustering to check if there are any groups/patterns that we might not yet be aware of. If there are any groups that aren’t explicitly labeled in the data, this should be able to find it. Fir

3.1 Normalization

cancer_kmeans = cancer

# Normalization function
normalize = function(x){
  (x - min(x)) / (max(x) - min(x))
}

cancer_normalize = na.omit(cancer_kmeans)

# Convert to factors
cancer_normalize$diagnosis = as.factor(cancer_normalize$diagnosis)
#str(cancer_normalize)

# Normalize the factors
cancer_cluster = cancer_normalize
cancer_normalize = cancer_normalize[, -c(1)]
#str(cancer_normalize)

cancer_normalize = normalize(cancer_normalize)
#str(cancer_normalize)

3.2 Clustering

# Clustering
# Find the number of clusters using NBCluster

#str(cancer_normalize)

# we want to select the total number of rows, but drop salary later
clust_data_cancer = select_if(cancer_normalize,is.numeric)
# dont include radius
clust_data_cancer = clust_data_cancer[, -c(1)]
#view(clust_data_cancer)

clust_data_cancer = na.omit(clust_data_cancer)
set.seed(1)

kmeans_obj_cancer = kmeans(clust_data_cancer, centers = 2, 
                        algorithm = "Lloyd")

kmeans_obj_cancer
## K-means clustering with 2 clusters of sizes 438, 131
## 
## Cluster means:
##   texture_mean perimeter_mean area_mean smoothness_mean compactness_mean
## 1  0.004365389     0.01906993 0.1166107    2.230477e-05     2.141510e-05
## 2  0.005099807     0.03014370 0.2787799    2.381161e-05     3.493488e-05
##   concavity_mean concave.points_mean symmetry_mean fractal_dimension_mean
## 1   1.467742e-05        7.859084e-06  4.185660e-05           1.491632e-05
## 2   4.159367e-05        2.367155e-05  4.502579e-05           1.424610e-05
##      radius_se   texture_se perimeter_se     area_se smoothness_se
## 1 7.150702e-05 0.0002856496 0.0005060838 0.005591277  1.686239e-06
## 2 1.746130e-04 0.0002873856 0.0012342690 0.022491342  1.551172e-06
##   compactness_se concavity_se concave.points_se  symmetry_se
## 1   5.518263e-06 6.757290e-06      2.500311e-06 4.845693e-06
## 2   7.563868e-06 9.971737e-06      3.684529e-06 4.772912e-06
##   fractal_dimension_se radius_worst texture_worst perimeter_worst area_worst
## 1         8.809363e-07  0.003301340   0.005808543      0.02161201  0.1456624
## 2         9.293346e-07  0.005573452   0.006796585      0.03725815  0.4120881
##   smoothness_worst compactness_worst concavity_worst concave.points_worst
## 1     3.054986e-05      5.249454e-05    5.153149e-05         2.146917e-05
## 2     3.301005e-05      8.409913e-05    1.056197e-04         4.523532e-05
##   symmetry_worst fractal_dimension_worst
## 1   6.665577e-05            1.957733e-05
## 2   7.331492e-05            2.025517e-05
## 
## Clustering vector:
##   1   2   3   4   5   6   7   8   9  10  11  12  13  14  15  16  17  18  19  20 
##   2   2   2   1   2   1   2   1   1   1   1   2   2   1   1   1   1   2   2   1 
##  21  22  23  24  25  26  27  28  29  30  31  32  33  34  35  36  37  38  39  40 
##   1   1   1   2   2   2   1   2   2   2   2   1   2   2   2   2   1   1   1   1 
##  41  42  43  44  45  46  47  48  49  50  51  52  53  54  55  56  57  58  59  60 
##   1   1   2   1   1   2   1   1   1   1   1   1   1   2   1   1   2   1   1   1 
##  61  62  63  64  65  66  67  68  69  70  71  72  73  74  75  76  77  78  79  80 
##   1   1   1   1   1   1   1   1   1   1   2   1   2   1   1   2   1   2   2   1 
##  81  82  83  84  85  86  87  88  89  90  91  92  93  94  95  96  97  98  99 100 
##   1   1   2   2   1   2   1   2   1   1   1   1   1   1   1   2   1   1   1   1 
## 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 
##   1   1   1   1   1   1   1   1   2   1   1   1   1   1   1   1   1   1   2   2 
## 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 
##   1   2   2   1   1   1   1   2   1   2   1   1   1   1   2   1   1   1   1   1 
## 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 
##   1   2   1   1   1   1   1   1   1   1   1   1   1   1   1   1   2   1   1   1 
## 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 
##   1   2   2   1   2   1   1   2   2   1   1   1   1   1   1   1   1   1   1   1 
## 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 
##   2   2   2   1   1   1   2   1   1   1   1   1   1   1   1   1   1   2   2   1 
## 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 
##   1   2   2   1   1   1   1   2   1   1   2   1   2   1   1   1   1   1   2   2 
## 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 
##   1   1   1   1   1   1   1   1   1   1   2   1   1   2   1   1   2   2   1   2 
## 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 
##   1   1   1   1   2   1   1   1   1   1   2   1   2   2   2   1   2   1   2   1 
## 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 
##   2   2   2   1   2   2   1   1   1   1   1   1   2   1   2   1   1   2   1   1 
## 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 
##   2   1   2   1   1   1   1   1   1   1   1   1   1   1   1   1   1   1   1   1 
## 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 
##   2   1   2   1   1   1   1   1   1   1   1   1   1   1   1   1   1   2   1   1 
## 321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 
##   1   2   1   2   1   1   1   1   1   1   1   1   1   1   1   2   1   2   1   2 
## 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 
##   1   1   1   2   1   1   1   1   1   1   1   1   2   1   1   1   1   1   1   1 
## 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 378 379 380 
##   1   1   1   1   1   2   2   1   2   2   1   1   2   2   1   1   1   1   1   1 
## 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 
##   1   1   1   1   1   1   1   1   1   2   1   1   2   2   1   1   1   1   1   1 
## 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 419 420 
##   2   1   1   1   1   1   1   1   2   1   1   1   1   1   1   1   1   2   1   1 
## 421 422 423 424 425 426 427 428 429 430 431 432 433 434 435 436 437 438 439 440 
##   1   1   1   1   1   1   1   1   1   1   1   1   2   2   1   1   1   1   1   1 
## 441 442 443 444 445 446 447 448 449 450 451 452 453 454 455 456 457 458 459 460 
##   1   2   1   1   2   1   2   1   1   2   1   2   1   1   1   1   1   1   1   1 
## 461 462 463 464 465 466 467 468 469 470 471 472 473 474 475 476 477 478 479 480 
##   2   2   1   1   1   1   1   1   2   1   1   1   1   1   1   1   1   1   1   1 
## 481 482 483 484 485 486 487 488 489 490 491 492 493 494 495 496 497 498 499 500 
##   1   1   1   1   1   1   1   2   1   1   1   2   2   1   1   1   1   1   2   2 
## 501 502 503 504 505 506 507 508 509 510 511 512 513 514 515 516 517 518 519 520 
##   1   1   1   2   1   1   1   1   1   1   1   1   1   1   1   1   2   2   1   1 
## 521 522 523 524 525 526 527 528 529 530 531 532 533 534 535 536 537 538 539 540 
##   1   2   1   1   1   1   1   1   1   1   1   1   1   2   1   2   1   1   1   1 
## 541 542 543 544 545 546 547 548 549 550 551 552 553 554 555 556 557 558 559 560 
##   1   1   1   1   1   1   1   1   1   1   1   1   1   1   1   1   1   1   1   1 
## 561 562 563 564 565 566 567 568 569 
##   1   1   1   2   2   2   1   2   1 
## 
## Within cluster sum of squares by cluster:
## [1] 1.578098 2.728849
##  (between_SS / total_SS =  69.6 %)
## 
## Available components:
## 
## [1] "cluster"      "centers"      "totss"        "withinss"     "tot.withinss"
## [6] "betweenss"    "size"         "iter"         "ifault"
#Run Nbcluster
(nbclust_obj_cancer = NbClust(data = clust_data_cancer, method = "kmeans"))

## *** : The Hubert index is a graphical method of determining the number of clusters.
##                 In the plot of Hubert index, we seek a significant knee that corresponds to a 
##                 significant increase of the value of the measure i.e the significant peak in Hubert
##                 index second differences plot. 
## 

## *** : The D index is a graphical method of determining the number of clusters. 
##                 In the plot of D index, we seek a significant knee (the significant peak in Dindex
##                 second differences plot) that corresponds to a significant increase of the value of
##                 the measure. 
##  
## ******************************************************************* 
## * Among all indices:                                                
## * 7 proposed 2 as the best number of clusters 
## * 5 proposed 3 as the best number of clusters 
## * 4 proposed 4 as the best number of clusters 
## * 1 proposed 5 as the best number of clusters 
## * 2 proposed 7 as the best number of clusters 
## * 1 proposed 11 as the best number of clusters 
## * 3 proposed 13 as the best number of clusters 
## * 1 proposed 14 as the best number of clusters 
## 
##                    ***** Conclusion *****                            
##  
## * According to the majority rule, the best number of clusters is  2 
##  
##  
## *******************************************************************
## $All.index
##         KL       CH Hartigan       CCC    Scott Marriot TrCovW TraceW Friedman
## 2   6.4151 1300.213 307.8320   27.2871 5090.463       0 0.5414 4.3069 26825.45
## 3   1.2208 1154.934 412.3101    0.4974 5583.627       0 0.2141 2.7914 27528.24
## 4   2.4242 1465.683 239.1378  -11.8234 6240.724       0 0.0637 1.6150 28135.06
## 5   2.4642 1621.438 118.3121  -22.2010 6679.261       0 0.0279 1.1347 28569.76
## 6   0.9462 1590.095 153.3970  -48.0828 7092.179       0 0.0177 0.9380 29916.25
## 7   2.0941 1708.641  85.2548  -76.9083 7481.638       0 0.0100 0.7371 30252.55
## 8   1.9296 1695.877  49.2088 -109.0963 7758.878       0 0.0071 0.6400 30934.43
## 9   2.0405 1617.316  26.7772 -147.1591 7969.277       0 0.0058 0.5884 31779.37
## 10  2.1876 1506.636  14.2281 -186.7504 8092.253       0 0.0052 0.5616 31934.97
## 11  1.2955 1389.419  11.5957 -225.5569 8394.290       0 0.0049 0.5476 34403.79
## 12  0.2819 1288.098  33.8406 -266.4950 8537.461       0 0.0047 0.5365 36064.34
## 13 71.2971 1253.060   3.2797 -264.2246 8669.231       0 0.0041 0.5057 36225.60
## 14  0.0184 1161.653 -19.7971 -264.8607 8751.786       0 0.0040 0.5028 36670.52
## 15 16.1163 1036.915   1.0851 -267.8029 8796.885       0 0.0044 0.5214 38197.08
##       Rubin Cindex     DB Silhouette   Duda  Pseudot2    Beale Ratkowsky   Ball
## 2   12.2522 0.0915 0.6530     0.6973 0.7143  193.2049   8.1507    0.2900 2.1535
## 3   18.9041 0.0767 0.7596     0.5473 1.6251 -107.7030  -7.8012    0.2651 0.9305
## 4   32.6751 0.0853 0.7288     0.5335 0.8496   43.5522   3.5956    0.2418 0.4037
## 5   46.5050 0.0806 0.7068     0.5119 2.7106  -78.2541 -12.6991    0.2235 0.2269
## 6   56.2605 0.0663 0.7092     0.4708 2.1033 -104.9098 -10.6172    0.2112 0.1563
## 7   71.5894 0.0664 0.7362     0.4792 1.1544  -22.0691  -2.6845    0.2002 0.1053
## 8   82.4494 0.0574 0.7644     0.4609 1.4795  -48.2887  -6.4864    0.1899 0.0800
## 9   89.6816 0.0528 0.7578     0.4425 2.9983  -46.6535 -13.3091    0.1814 0.0654
## 10  93.9698 0.0493 0.7854     0.4251 1.5440  -32.7684  -7.0781    0.1729 0.0562
## 11  96.3616 0.0482 0.7845     0.4207 1.0104   -0.8730  -0.2050    0.1662 0.0498
## 12  98.3641 0.0477 0.8033     0.4142 1.2199   -9.3729  -3.5914    0.1590 0.0447
## 13 104.3402 0.0442 0.8340     0.4041 1.6693  -30.4708  -7.8607    0.1541 0.0389
## 14 104.9557 0.0448 0.8555     0.3839 2.1370  -48.4163 -10.4481    0.1491 0.0359
## 15 101.2119 0.0495 0.9054     0.3694 1.4450  -16.9368  -5.9896    0.1451 0.0348
##    Ptbiserial    Frey McClain   Dunn Hubert  SDindex Dindex   SDbw
## 2      0.7387  3.6134  0.1334 0.0173 0.1017 163.1469 0.0631 0.7254
## 3      0.6019  2.2182  0.2921 0.0071 0.1094 123.0354 0.0505 0.7747
## 4      0.5447  1.3678  0.3861 0.0099 0.1249 116.7342 0.0408 0.5088
## 5      0.5148  2.3498  0.4335 0.0078 0.1293 100.7790 0.0345 0.4332
## 6      0.4395  0.2275  0.5953 0.0050 0.1336 104.1955 0.0295 0.3704
## 7      0.4405  1.9281  0.5626 0.0082 0.1372  98.8234 0.0268 0.2849
## 8      0.3964  2.4866  0.6653 0.0037 0.1383 104.1358 0.0239 0.2683
## 9      0.3619  2.7888  0.7789 0.0055 0.1391 126.3474 0.0221 0.2438
## 10     0.3287  3.6362  0.9252 0.0022 0.1395 147.2848 0.0207 0.2257
## 11     0.3138 11.4927  1.0102 0.0049 0.1401 160.0787 0.0198 0.2024
## 12     0.2904  0.3986  1.1873 0.0067 0.1403 193.5250 0.0190 0.1849
## 13     0.2857 -3.4858  1.1682 0.0036 0.1404 193.9647 0.0181 0.1676
## 14     0.2750 -0.9362  1.2811 0.0036 0.1406 252.1469 0.0179 0.1627
## 15     0.2520 -1.0380  1.6477 0.0056 0.1406 342.5448 0.0180 0.1683
## 
## $All.CriticalValues
##    CritValue_Duda CritValue_PseudoT2 Fvalue_Beale
## 2          0.9385            31.6737            0
## 3          0.9092            27.9489            1
## 4          0.9165            22.4232            0
## 5          0.8783            17.1812            1
## 6          0.9001            22.1961            1
## 7          0.8702            24.6156            1
## 8          0.8620            23.8479            1
## 9          0.8559            11.7865            1
## 10         0.8728            13.5507            1
## 11         0.8545            14.4691            1
## 12         0.8502             9.1619            1
## 13         0.8156            17.1876            1
## 14         0.8186            20.1659            1
## 15         0.8014            13.6296            1
## 
## $Best.nc
##                      KL       CH Hartigan     CCC    Scott Marriot TrCovW
## Number_clusters 13.0000    7.000   4.0000  2.0000   4.0000       4 3.0000
## Value_Index     71.2971 1708.641 173.1723 27.2871 657.0972       0 0.3273
##                 TraceW Friedman   Rubin  Cindex    DB Silhouette   Duda
## Number_clusters 4.0000   11.000 13.0000 13.0000 2.000     2.0000 3.0000
## Value_Index     0.6962 2468.819 -5.3606  0.0442 0.653     0.6973 1.6251
##                 PseudoT2   Beale Ratkowsky  Ball PtBiserial   Frey McClain
## Number_clusters    3.000  3.0000      2.00 3.000     2.0000 5.0000  2.0000
## Value_Index     -107.703 -7.8012      0.29 1.223     0.7387 2.3498  0.1334
##                   Dunn Hubert SDindex Dindex    SDbw
## Number_clusters 2.0000      0  7.0000      0 14.0000
## Value_Index     0.0173      0 98.8234      0  0.1627
## 
## $Best.partition
##   1   2   3   4   5   6   7   8   9  10  11  12  13  14  15  16  17  18  19  20 
##   2   2   2   1   2   1   2   1   1   1   1   2   2   1   1   1   1   2   2   1 
##  21  22  23  24  25  26  27  28  29  30  31  32  33  34  35  36  37  38  39  40 
##   1   1   1   2   2   2   1   2   2   2   2   1   2   2   2   2   1   1   1   1 
##  41  42  43  44  45  46  47  48  49  50  51  52  53  54  55  56  57  58  59  60 
##   1   1   2   1   1   2   1   1   1   1   1   1   1   2   1   1   2   1   1   1 
##  61  62  63  64  65  66  67  68  69  70  71  72  73  74  75  76  77  78  79  80 
##   1   1   1   1   1   1   1   1   1   1   2   1   2   1   1   2   1   2   2   1 
##  81  82  83  84  85  86  87  88  89  90  91  92  93  94  95  96  97  98  99 100 
##   1   1   2   2   1   2   1   2   1   1   1   1   1   1   1   2   1   1   1   1 
## 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 
##   1   1   1   1   1   1   1   1   2   1   1   1   1   1   1   1   1   1   2   2 
## 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 
##   1   2   2   1   1   1   1   2   1   2   1   1   1   1   2   1   1   1   1   1 
## 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 
##   1   2   1   1   1   1   1   1   1   1   1   1   1   1   1   1   2   1   1   1 
## 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 
##   1   2   2   1   2   1   1   2   2   1   1   1   1   1   1   1   1   1   1   1 
## 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 
##   2   2   2   1   1   1   2   1   1   1   1   1   1   1   1   1   1   2   2   1 
## 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 
##   1   2   2   1   1   1   1   2   1   1   2   1   2   1   1   1   1   1   2   2 
## 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 
##   1   1   1   1   1   1   1   1   1   1   2   1   1   2   1   1   2   2   1   2 
## 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 
##   1   1   1   1   2   1   1   1   1   1   2   1   2   2   2   1   2   1   2   1 
## 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 
##   2   2   2   1   2   2   1   1   1   1   1   1   2   1   2   1   1   2   1   1 
## 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 
##   2   1   2   1   1   1   1   1   1   1   1   1   1   1   1   1   1   1   1   1 
## 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 
##   2   1   2   1   1   1   1   1   1   1   1   1   1   1   1   1   1   2   1   1 
## 321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 
##   1   2   1   2   1   1   1   1   1   1   1   1   1   1   1   2   1   2   1   2 
## 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 
##   1   1   1   2   1   1   1   1   1   1   1   1   2   1   1   1   1   1   1   1 
## 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 378 379 380 
##   1   1   1   1   1   2   2   1   2   2   1   1   2   2   1   1   1   1   1   1 
## 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 
##   1   1   1   1   1   1   1   1   1   2   1   1   2   2   1   1   1   1   1   1 
## 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 419 420 
##   2   1   1   1   1   1   1   1   2   1   1   1   1   1   1   1   1   2   1   1 
## 421 422 423 424 425 426 427 428 429 430 431 432 433 434 435 436 437 438 439 440 
##   1   1   1   1   1   1   1   1   1   1   1   1   2   2   1   1   1   1   1   1 
## 441 442 443 444 445 446 447 448 449 450 451 452 453 454 455 456 457 458 459 460 
##   1   2   1   1   2   1   2   1   1   2   1   2   1   1   1   1   1   1   1   1 
## 461 462 463 464 465 466 467 468 469 470 471 472 473 474 475 476 477 478 479 480 
##   2   2   1   1   1   1   1   1   2   1   1   1   1   1   1   1   1   1   1   1 
## 481 482 483 484 485 486 487 488 489 490 491 492 493 494 495 496 497 498 499 500 
##   1   1   1   1   1   1   1   2   1   1   1   2   2   1   1   1   1   1   2   2 
## 501 502 503 504 505 506 507 508 509 510 511 512 513 514 515 516 517 518 519 520 
##   1   1   1   2   1   1   1   1   1   1   1   1   1   1   1   1   2   2   1   1 
## 521 522 523 524 525 526 527 528 529 530 531 532 533 534 535 536 537 538 539 540 
##   1   2   1   1   1   1   1   1   1   1   1   1   1   2   1   2   1   1   1   1 
## 541 542 543 544 545 546 547 548 549 550 551 552 553 554 555 556 557 558 559 560 
##   1   1   1   1   1   1   1   1   1   1   1   1   1   1   1   1   1   1   1   1 
## 561 562 563 564 565 566 567 568 569 
##   1   1   1   2   2   2   1   2   1

3.3 Visualization

Lets visualize this graph so that we can create something that is human understandable, and something that might allow us to draw conclusions. Maybe the visualization will allow us to quickly see some top candidates for who we might want to recruit.

# Subset the 1st row from Best.nc and convert it 
# to a data frame so ggplot2 can plot it.

freq_k_cancer = nbclust_obj_cancer$Best.nc[1,]
freq_k_cancer = data.frame(freq_k_cancer)


# Check the maximum number of clusters suggested.
max(freq_k_cancer)
## [1] 14
# Plot as a histogram.
ggplot(freq_k_cancer,
       aes(x = freq_k_cancer)) +
  geom_bar() +
  scale_x_continuous(breaks = seq(0, 15, by = 1)) +
  scale_y_continuous(breaks = seq(0, 12, by = 1)) +
  labs(x = "Number of Clusters",
       y = "Number of Votes",
       title = "Cluster Analysis")

#Cluster # 2 got the most votes

3.4 Nbclust

# Now we are going to build a simple decision tree using the clusters as a feature

# reminder this is our model, using 3 clusters 
set.seed(1980)
kmeans_obj_cancer = kmeans(clust_data_cancer, centers = 2,
                        algorithm = "Lloyd")

# this is the output of the model. 
kmeans_obj_cancer$cluster
##   1   2   3   4   5   6   7   8   9  10  11  12  13  14  15  16  17  18  19  20 
##   2   2   2   1   2   1   2   1   1   1   1   2   2   1   1   1   1   2   2   1 
##  21  22  23  24  25  26  27  28  29  30  31  32  33  34  35  36  37  38  39  40 
##   1   1   1   2   2   2   1   2   2   2   2   1   2   2   2   2   1   1   1   1 
##  41  42  43  44  45  46  47  48  49  50  51  52  53  54  55  56  57  58  59  60 
##   1   1   2   1   1   2   1   1   1   1   1   1   1   2   1   1   2   1   1   1 
##  61  62  63  64  65  66  67  68  69  70  71  72  73  74  75  76  77  78  79  80 
##   1   1   1   1   1   1   1   1   1   1   2   1   2   1   1   2   1   2   2   1 
##  81  82  83  84  85  86  87  88  89  90  91  92  93  94  95  96  97  98  99 100 
##   1   1   2   2   1   2   1   2   1   1   1   1   1   1   1   2   1   1   1   1 
## 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 
##   1   1   1   1   1   1   1   1   2   1   1   1   1   1   1   1   1   1   2   2 
## 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 
##   1   2   2   1   1   1   1   2   1   2   1   1   1   1   2   1   1   1   1   1 
## 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 
##   1   2   1   1   1   1   1   1   1   1   1   1   1   1   1   1   2   1   1   1 
## 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 
##   1   2   2   1   2   1   1   2   2   1   1   1   1   1   1   1   1   1   1   1 
## 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 
##   2   2   2   1   1   1   2   1   1   1   1   1   1   1   1   1   1   2   2   1 
## 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 
##   1   2   2   1   1   1   1   2   1   1   2   1   2   1   1   1   1   1   2   2 
## 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 
##   1   1   1   1   1   1   1   1   1   1   2   1   1   2   1   1   2   2   1   2 
## 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 
##   1   1   1   1   2   1   1   1   1   1   2   1   2   2   2   1   2   1   2   1 
## 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 
##   2   2   2   1   2   2   1   1   1   1   1   1   2   1   2   1   1   2   1   1 
## 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 
##   2   1   2   1   1   1   1   1   1   1   1   1   1   1   1   1   1   1   1   1 
## 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 
##   2   1   2   1   1   1   1   1   1   1   1   1   1   1   1   1   1   2   1   1 
## 321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 
##   1   2   1   2   1   1   1   1   1   1   1   1   1   1   1   2   1   2   1   2 
## 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 
##   1   1   1   2   1   1   1   1   1   1   1   1   2   1   1   1   1   1   1   1 
## 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 378 379 380 
##   1   1   1   1   1   2   2   1   2   2   1   1   2   2   1   1   1   1   1   1 
## 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 
##   1   1   1   1   1   1   1   1   1   2   1   1   2   2   1   1   1   1   1   1 
## 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 419 420 
##   2   1   1   1   1   1   1   1   2   1   1   1   1   1   1   1   1   2   1   1 
## 421 422 423 424 425 426 427 428 429 430 431 432 433 434 435 436 437 438 439 440 
##   1   1   1   1   1   1   1   1   1   1   1   1   2   2   1   1   1   1   1   1 
## 441 442 443 444 445 446 447 448 449 450 451 452 453 454 455 456 457 458 459 460 
##   1   2   1   1   2   1   2   1   1   2   1   2   1   1   1   1   1   1   1   1 
## 461 462 463 464 465 466 467 468 469 470 471 472 473 474 475 476 477 478 479 480 
##   2   2   1   1   1   1   1   1   2   1   1   1   1   1   1   1   1   1   1   1 
## 481 482 483 484 485 486 487 488 489 490 491 492 493 494 495 496 497 498 499 500 
##   1   1   1   1   1   1   1   2   1   1   1   2   2   1   1   1   1   1   2   2 
## 501 502 503 504 505 506 507 508 509 510 511 512 513 514 515 516 517 518 519 520 
##   1   1   1   2   1   1   1   1   1   1   1   1   1   1   1   1   2   2   1   1 
## 521 522 523 524 525 526 527 528 529 530 531 532 533 534 535 536 537 538 539 540 
##   1   2   1   1   1   1   1   1   1   1   1   1   1   2   1   2   1   1   1   1 
## 541 542 543 544 545 546 547 548 549 550 551 552 553 554 555 556 557 558 559 560 
##   1   1   1   1   1   1   1   1   1   1   1   1   1   1   1   1   1   1   1   1 
## 561 562 563 564 565 566 567 568 569 
##   1   1   1   2   2   2   1   2   1
cancer_normalize$clusters = kmeans_obj_cancer$cluster

clusters = as.factor(kmeans_obj_cancer$cluster)

#View(clusters)

cancer_cluster = cbind(clust_data_cancer,clusters)

#View(cancer_cluster)

3.5 3D Visualization

# Lets visualize this

#view(cancer_cluster)

radius = cancer_kmeans$radius_mean

diagnosis = cancer_kmeans$diagnosis

cancer_cluster = cbind(cancer_cluster, radius)
cancer_cluster = cbind(cancer_cluster, diagnosis)
view(cancer_cluster)
# We can visualize votes in 3D with the following code.
# We're essentially creating a visualization that takes into consideration
# each of the players points, assists, and minutes played, all on a 3d graph.
# This allows us to get a sense of which players have a high ratio of what I
# might describe as "efficiency rate," and it will color the dots on the 3d graph
# by their salary. Using this, we can look for the darker colored (lower paid) players
# who are higher up up the graph, which means that they're highly efficient and also
# not paid very much 
fig = plot_ly(cancer_cluster,
               type = "scatter3d",
               mode = "markers", 
               symbol = ~clusters,
               x = ~perimeter_mean,
               y = ~concave.points_mean,
               z = ~compactness_mean,
               color = ~cancer_cluster$radius,
               text = ~paste('Diagnosis:', diagnosis))

fig

This k-means clustered 3D scatterplot does a decent job visualizing the data, and giving us an idea of how well variables like concavity and perimeter can predict variables such as radius. As the colors get brighter and the points make their way to the top, the classification is much more likely to be malignant than benign. This gives us the feeling that our models should be able to predict diagnosis pretty well.

4 Model Building

4.1 Random Forest

For context, Random forest creates random uncorrelated decision trees that considers randomness and bagging to deliver a classification prediction based in the aggregate outcomes of individual binary splits and terminal nodes. There is an exhaustive search over all variables and possible split points to find the split point that best reduces the node impurity. Then, the ideal split is set and this process repeats in the left and right leaves in turn, recursively, until some stopping rules are met.

First, we will run a random forest and assess its base merits and shortfalls. After some tuning, we expect to achieve small false negatives with a more realistic split in diagnosis status (e.g. 12% of women are expected to develop malignant breast cancer, not 38%). We discussed taking a subset of the test set that intentionally oversamples the minority class. We do this to find the minimum proportion needed for a training set that allows for a defensible false negative rate in eventual prediction. First, we must understand the quality of the model with a somewhat balanced set:

4.1.1 Row Partition

sample_rows = 1:nrow(cancer)
#sample_rows
set.seed(1984) #sample(x, size, replace = FALSE, prob = NULL)
test_rows = sample(sample_rows,
                   dim(cancer)[1]*.10,
                   replace = FALSE)
cancer_train = cancer[-test_rows,]
cancer_test = cancer[test_rows,]
dim(cancer)
## [1] 569  31
###### for tune
set.seed(1984) #sample(x, size, replace = FALSE, prob = NULL)
test_rows = sample(sample_rows,
                   dim(cancer_test)[1]*.50, 
                   replace = FALSE)# We don't want duplicate samples
tune = cancer_test[-test_rows,]
test = cancer_test[test_rows,]
set.seed(1984) #sample(x, size, replace = FALSE, prob = NULL)
test_rows = sample(sample_rows,
                   dim(tune)[1]*.50, 
                   replace = FALSE)# We don't want duplicate samples
x_tune = tune[-test_rows,]
y_tune = tune[test_rows,]

4.1.2 Mytry Tuning

A base model randomly selects mtry variables from the set of predictors available. This causes each split to have a different random set of variables selected within. Here, we aim to improve model accuracy by determining the ideal number of features randomly sampled as candidates at each split with the random forest. The square root of total features within the set serves a rough rule of thumb for this value. In this case, the value came out to 5.477.

mytry_tune <- function(x){
  xx <- dim(x)[2]-1
  sqrt(xx)
}
mytry_tune(cancer) #5.477226
## [1] 5.477226

4.1.3 Random Forest Model #1

set.seed(2023)  
cancer_RF = randomForest((diagnosis)~.,        
                            cancer_train,    
                            #y = NULL,         
                            #subset = NULL,    
                            #xtest = NULL,  
                            #ytest = NULL,  
                            ntree = 2000,        
                            mtry = 5,           
                            replace = TRUE,   
                            #classwt = NULL, 
                            #strata = NULL,    
                            sampsize = 100,  
                            nodesize = 5,      
                            #maxnodes = NULL,  
                            importance = TRUE, 
                            #localImp = FALSE, 
                            proximity = FALSE,   
                            norm.votes = TRUE, 
                            do.trace = TRUE,   
                            keep.forest = TRUE, 
                            keep.inbag = TRUE)   
cancer_RF
cancer_RF$call

4.1.4 Confusion Matrix (#1)

cancer_RF$confusion
##     0   1 class.error
## 0 314   9  0.02786378
## 1  15 175  0.07894737
print(cancer_RF)
## 
## Call:
##  randomForest(formula = (diagnosis) ~ ., data = cancer_train,      ntree = 2000, mtry = 5, replace = TRUE, sampsize = 100, nodesize = 5,      importance = TRUE, proximity = FALSE, norm.votes = TRUE,      do.trace = TRUE, keep.forest = TRUE, keep.inbag = TRUE) 
##                Type of random forest: classification
##                      Number of trees: 2000
## No. of variables tried at each split: 5
## 
##         OOB estimate of  error rate: 4.68%
## Confusion matrix:
##     0   1 class.error
## 0 314   9  0.02786378
## 1  15 175  0.07894737

The Out of Bag error rate is 4.87% for this number of trees. As we tried the rough estimate of finding the mtry, we will try random search method to find better mtry value for the second model.

cancer_predict = predict(cancer_RF,    
                            cancer_test,      
                            type = "response",   
                            predict.all = TRUE, 
                            proximity = FALSE)  
#cancer_predict
cancer_test_pred = data.frame(cancer_test, 
                                 Prediction = cancer_predict$aggregate)



# Create the confusion matrix.
cancer_test_matrix_RF = table(cancer_test_pred$diagnosis, 
                            cancer_test_pred$Prediction)

cancer_test_matrix_RF
##    
##      0  1
##   0 34  0
##   1  1 21
# Calculate the misclassification or the error rate.
cancer_test_error_rate_RF = sum(cancer_test_matrix_RF[row(cancer_test_matrix_RF) != 
                                                    col(cancer_test_matrix_RF)]) / 
  sum(cancer_test_matrix_RF)

cancer_test_error_rate_RF
## [1] 0.01785714
confusionMatrix(cancer_test_pred$Prediction,cancer_test_pred$diagnosis,positive = "1", 
                dnn=c("Prediction", "Actual"), mode = "everything")
## Confusion Matrix and Statistics
## 
##           Actual
## Prediction  0  1
##          0 34  1
##          1  0 21
##                                           
##                Accuracy : 0.9821          
##                  95% CI : (0.9045, 0.9995)
##     No Information Rate : 0.6071          
##     P-Value [Acc > NIR] : 2.724e-11       
##                                           
##                   Kappa : 0.9623          
##                                           
##  Mcnemar's Test P-Value : 1               
##                                           
##             Sensitivity : 0.9545          
##             Specificity : 1.0000          
##          Pos Pred Value : 1.0000          
##          Neg Pred Value : 0.9714          
##               Precision : 1.0000          
##                  Recall : 0.9545          
##                      F1 : 0.9767          
##              Prevalence : 0.3929          
##          Detection Rate : 0.3750          
##    Detection Prevalence : 0.3750          
##       Balanced Accuracy : 0.9773          
##                                           
##        'Positive' Class : 1               
## 

The confusion matrix shows that the accuracy is very high with 0.98, the kappa is at 0.96, and F1 is at 0.9767. As our data is imbalanced, this is a good indication that our model is working well.

cancer_RF_acc = sum(cancer_RF$confusion[row(cancer_RF$confusion) == 
                                                col(cancer_RF$confusion)]) / 
  sum(cancer_RF$confusion)
cancer_RF_acc
## [1] 0.9530179

The accuracy of the first random forest model came out to be 95.3%.

4.1.5 inbag argument

# The "inbag" argument shows you which data point is included in which trees.
str(as.data.frame(cancer_RF$inbag))

#View(as.data.frame(cancer_RF$inbag))
inbag <- as.data.frame(cancer_RF$inbag)
sum(inbag[,2000])
dim(cancer_RF$inbag)

4.1.6 General Error Rate

err.rate <- as.data.frame(cancer_RF$err.rate)
err.rate[2000,]
##             OOB          0          1
## 2000 0.04678363 0.02786378 0.07894737

4.1.7 Visualizing the Result

cancer_RF_error = data.frame(1:nrow(cancer_RF$err.rate),
                                cancer_RF$err.rate)

colnames(cancer_RF_error) = c("Number of Trees", "Out of the Box",
                                 "Benign", "Malignant")
# Add another variable that measures the difference between the error rates, in
# some situations we would want to minimize this but need to use caution because
# it could be that the differences are small but that both errors are really high,
# just another point to track. 
cancer_RF_error$Diff <- cancer_RF_error$`Benign`-cancer_RF_error$`Malignant`
#View(cancer_RF_error)
#rm(fig)
fig <- plot_ly(x=cancer_RF_error$`Number of Trees`, y=cancer_RF_error$Diff,name="Diff", type = 'scatter', mode = 'lines')
fig <- fig %>% add_trace(y=cancer_RF_error$`Out of the Box`, name="OOB_Er")
fig <- fig %>% add_trace(y=cancer_RF_error$`Malignant`, name="Malignant")
fig <- fig %>% add_trace(y=cancer_RF_error$`Benign`, name="Benign")
fig

After observing the ouput, we can see a fairly stable region after around 1500 trees, so for the second model we will set our ntree parameter to 1500.

4.1.8 Analysis and Tuning

Our model is working good based on the confusion matrix. However, we want to tune our model with better mtry and number of trees to lower the OOB estimate of error rate and have more accurate more.

# Random Search
control <- trainControl(method="repeatedcv", number=10, repeats=3, search="random")
set.seed(2023)
metric <- "Accuracy"
mtry <- sqrt(ncol(cancer))
rf_random <- train(diagnosis~., data=cancer, method="rf", metric=metric, tuneLength=15, trControl=control)
print(rf_random)
## Random Forest 
## 
## 569 samples
##  30 predictor
##   2 classes: '0', '1' 
## 
## No pre-processing
## Resampling: Cross-Validated (10 fold, repeated 3 times) 
## Summary of sample sizes: 512, 512, 512, 512, 511, 512, ... 
## Resampling results across tuning parameters:
## 
##   mtry  Accuracy   Kappa    
##    1    0.9602393  0.9138950
##    2    0.9620351  0.9178108
##    5    0.9596650  0.9127717
##    8    0.9602598  0.9141011
##    9    0.9614399  0.9167936
##   12    0.9614298  0.9165645
##   15    0.9637690  0.9217663
##   17    0.9643433  0.9229718
##   19    0.9596952  0.9131040
##   26    0.9585152  0.9104309
##   29    0.9596747  0.9130319
## 
## Accuracy was used to select the optimal model using the largest value.
## The final value used for the model was mtry = 17.
plot(rf_random)

We did random search to find mtry, and the plot shows that at mtry 17 we have the highest accuracy and kappa. So, we will tune our model with new mtry and number of trees.

4.1.9 Random Forest #2

Given the results, we decided to implement some improvements to our model. We started by modifying the ntree value, decreasing it from 2000 to 1500. Next, we increased the mtry to 17 from 5.

(maybe delete, discuss) Further variable interactions stabilize at a slower rate than error, and given our large number of independent variables, we decided to take a make the new ntree value an odd number so ties can be broken.

4.1.10 Random Forest #2 Model

set.seed(2023)  
cancer_RF2 = randomForest((diagnosis)~.,      
                            cancer_train,    
                            #y = NULL,      
                            #subset = NULL,     
                            #xtest = NULL,     
                            #ytest = NULL,     
                            ntree = 1500,     
                            mtry = 17,         
                            replace = TRUE,   
                            #classwt = NULL,  
                            #strata = NULL,  
                            sampsize = 100,  
                            nodesize = 5,   
                            #maxnodes = NULL, 
                            importance = TRUE, 
                            #localImp = FALSE,  
                            proximity = FALSE, 
                            norm.votes = TRUE,  
                            do.trace = TRUE,   
                            keep.forest = TRUE, 
                            keep.inbag = TRUE) 
cancer_RF2
## 
## Call:
##  randomForest(formula = (diagnosis) ~ ., data = cancer_train,      ntree = 1500, mtry = 17, replace = TRUE, sampsize = 100,      nodesize = 5, importance = TRUE, proximity = FALSE, norm.votes = TRUE,      do.trace = TRUE, keep.forest = TRUE, keep.inbag = TRUE) 
##                Type of random forest: classification
##                      Number of trees: 1500
## No. of variables tried at each split: 17
## 
##         OOB estimate of  error rate: 4.68%
## Confusion matrix:
##     0   1 class.error
## 0 314   9  0.02786378
## 1  15 175  0.07894737
cancer_RF2$call
## randomForest(formula = (diagnosis) ~ ., data = cancer_train, 
##     ntree = 1500, mtry = 17, replace = TRUE, sampsize = 100, 
##     nodesize = 5, importance = TRUE, proximity = FALSE, norm.votes = TRUE, 
##     do.trace = TRUE, keep.forest = TRUE, keep.inbag = TRUE)
cancer_RF2$confusion
##     0   1 class.error
## 0 314   9  0.02786378
## 1  15 175  0.07894737

We can see that the OOB estimate of error rate decreased to 4.68% from 4.87%. This is our desired outcome, so we will be create confusion matrix to observe the performance of the model.

4.1.11 Confusion Matrix #2

cancer_RF_acc2 = sum(cancer_RF2$confusion[row(cancer_RF2$confusion) == 
                                                col(cancer_RF2$confusion)]) / 
  sum(cancer_RF2$confusion)
cancer_RF_acc2
## [1] 0.9530179
# The accuracy of this model is 0.8404
#### Random forest output ####
#View(as.data.frame(cancer_RF$votes))
# The "inbag" argument shows you which data point is included in which trees.
str(as.data.frame(cancer_RF2$inbag))
print(cancer_RF2)
## 
## Call:
##  randomForest(formula = (diagnosis) ~ ., data = cancer_train,      ntree = 1500, mtry = 17, replace = TRUE, sampsize = 100,      nodesize = 5, importance = TRUE, proximity = FALSE, norm.votes = TRUE,      do.trace = TRUE, keep.forest = TRUE, keep.inbag = TRUE) 
##                Type of random forest: classification
##                      Number of trees: 1500
## No. of variables tried at each split: 17
## 
##         OOB estimate of  error rate: 4.68%
## Confusion matrix:
##     0   1 class.error
## 0 314   9  0.02786378
## 1  15 175  0.07894737

We can see that the OOB estimate of error rate decreased to 4.68% from 4.87%. This is our desired outcome, so we will be create confusion matrix to observe the performance of the model.

cancer_predict2 = predict(cancer_RF2,    
                            cancer_test,      
                            type = "response",   
                            predict.all = TRUE, 
                            proximity = FALSE)  
#cancer_predict
cancer_test_pred2 = data.frame(cancer_test, 
                                 Prediction = cancer_predict$aggregate)


# Create the confusion matrix.
cancer_test_matrix_RF2 = table(cancer_test_pred2$diagnosis, 
                            cancer_test_pred2$Prediction)

cancer_test_matrix_RF2
##    
##      0  1
##   0 34  0
##   1  1 21
confusionMatrix(cancer_test_pred2$Prediction,cancer_test_pred2$diagnosis,positive = "1", 
                dnn=c("Prediction", "Actual"), mode = "everything")
## Confusion Matrix and Statistics
## 
##           Actual
## Prediction  0  1
##          0 34  1
##          1  0 21
##                                           
##                Accuracy : 0.9821          
##                  95% CI : (0.9045, 0.9995)
##     No Information Rate : 0.6071          
##     P-Value [Acc > NIR] : 2.724e-11       
##                                           
##                   Kappa : 0.9623          
##                                           
##  Mcnemar's Test P-Value : 1               
##                                           
##             Sensitivity : 0.9545          
##             Specificity : 1.0000          
##          Pos Pred Value : 1.0000          
##          Neg Pred Value : 0.9714          
##               Precision : 1.0000          
##                  Recall : 0.9545          
##                      F1 : 0.9767          
##              Prevalence : 0.3929          
##          Detection Rate : 0.3750          
##    Detection Prevalence : 0.3750          
##       Balanced Accuracy : 0.9773          
##                                           
##        'Positive' Class : 1               
## 

By observing the confusion matrix, we can tell that the model is not improved, but remained as exactly same as previous one. Our hypothesis is that our model was already working great with high performance. Since this dataset is for cancer breast cancer, we would rather have false positive than false negative. So, our model classifying more false negative than the false positive is not ideal.

  • maybe remove this part

F1 illustrates the harmonic mean between precision and recall, and it reaches its optimum 1 only if precision and recall are both at 100%. This serves as a more holistic analysis of our model than accuracy values, especially considering the incidence rate of benign cases in the dataset. Further, the purpose of our project involves mitigating false negatives, and this score in its base forms weights classification errors equally, which is not aligned with real preferences. So, in addition to our F1 score, we also implement an F Beta Score, a value with custom weighting of precision and recall. There is additional weight on recall given the severity of false negatives in classification. This is achieved by moving up the beta value to 2, which is the commonly used beta value when recall is preferred over precision.

4.1.12 In bag

#View(as.data.frame(cancer_RF2$inbag))
inbag <- as.data.frame(cancer_RF2$inbag)
sum(inbag[,1500])
dim(cancer_RF2$inbag)
print(cancer_RF2)
## 
## Call:
##  randomForest(formula = (diagnosis) ~ ., data = cancer_train,      ntree = 1500, mtry = 17, replace = TRUE, sampsize = 100,      nodesize = 5, importance = TRUE, proximity = FALSE, norm.votes = TRUE,      do.trace = TRUE, keep.forest = TRUE, keep.inbag = TRUE) 
##                Type of random forest: classification
##                      Number of trees: 1500
## No. of variables tried at each split: 17
## 
##         OOB estimate of  error rate: 4.68%
## Confusion matrix:
##     0   1 class.error
## 0 314   9  0.02786378
## 1  15 175  0.07894737

4.1.13 General Error Rate

err.rate2 <- as.data.frame(cancer_RF2$err.rate)
err.rate2[1500,]
##             OOB          0          1
## 1500 0.04678363 0.02786378 0.07894737
#cancer_RF$confusion
#cancer_RF2$confusion

4.1.14 Visualizing the Result 2

#### Visualize random forest results ####
# Let's visualize the results of the random forest.
# Let's start by looking at how the error rate changes as we add more trees.
cancer_RF_error2 = data.frame(1:nrow(cancer_RF2$err.rate),
                                cancer_RF2$err.rate)
#View(cancer_RF_error2)
colnames(cancer_RF_error2) = c("Number of Trees", "Out of the Box",
                                 "Benign", "Malignant")
# Add another variable that measures the difference between the error rates, in
# some situations we would want to minimize this but need to use caution because
# it could be that the differences are small but that both errors are really high,
# just another point to track. 
cancer_RF_error2$Diff <- cancer_RF_error2$`Benign`-cancer_RF_error2$`Malignant`
#View(cancer_RF_error)
#rm(fig)
fig2 <- plot_ly(x=cancer_RF_error2$`Number of Trees`, y=cancer_RF_error2$Diff,name="Diff", type = 'scatter', mode = 'lines')
fig2 <- fig2 %>% add_trace(y=cancer_RF_error2$`Out of the Box`, name="OOB_Er")
fig2 <- fig2 %>% add_trace(y=cancer_RF_error2$`Malignant`, name="Malignant")
fig2 <- fig2 %>% add_trace(y=cancer_RF_error2$`Benign`, name="Benign")
fig2

4.1.15 Analysis beyond Accuracy

#varImpPlot(cancer_RF2,     #<- the randomForest model to use
#           sort = TRUE,        #<- whether to sort variables by decreasing order of importance
#           n.var = 10,        #<- number of variables to display
#           main = "Important Factors for Diagnosis",
           #cex = 2,           #<- size of characters or symbols
#           bg = "white",       #<- background color for the plot
#           color = "blue",     #<- color to use for the points and labels
#           lcolor = "orange")  #<- color to use for the horizontal lines
#pred1 <- predict(cancer_RF,type = "prob")

#Traditional F1 Score
#F1 <- F1_Score(cancer$diagnosis,pred1[1:569])
#F1

# F Beta Score with custom weighting of precision and recall: there is additional weight on recall given the severity of false negatives in classification. This is achieved by moving up the beta value to 2, the common beta value when recall is preferred over precision
#FBeta <- FBeta_Score(cancer$diagnosis,pred1[1:569], positive = 1, beta = 2)

#FBeta

# variable importance plot 
variableimportance <- varImpPlot(cancer_RF2, sort = TRUE,main = "Variable Importance scale", type = 2)

The mean decrease in Gini coefficient is a measure of how each variable contributes to the homogeneity of the nodes and leaves in the resulting random forest. The higher the value of mean decrease Gini score, the higher the importance of the variable in the model. It basically represents the mean decrease in node impurity (and not the mean decrease in accuracy).

5 Conclusions/Limitations